What if we have more than one variable that is evolving through time and space?
populations of two species that are changing through time, and interact with each other
pollutant concentrations in a diffusion-advection model where the two pollutants also interact with each other
growth of above-ground and below-ground carbon and nitrogen in a forest
household consumption and savings through time (when consumption depends on savings and vice versa)
These require several differential equations that have to be solved simultaneously
Systems of equations
ordinary if differentiating with respect to one variable (time or space)
partial if differentiating with respect to multiple variables (x, y, z, time)
We can estimate trajectories of systems of equations in much the same way that we used numerical integration for our ODEs
Here again the methods (especially for complicated partial differential equation systems) can be complicated
Call in your engineering/math colleagues when you run into issues!
Two-variable dynamic models that have feedbacks between variables can create cyclical dynamics (and more complex behavior)
Two ways to look at results
time series of each state variable
how state variables interact with each other
Predator-Prey models
A simple approach that assumes prey grow exponentially, with a fixed intrinsic growth rate
Analogs
As with diffusion, the basic form/ideas in this model can be applied elsewhere
economics
infectious disease spread
combustion
\(\frac{\partial prey}{\partial t} = r_{prey} * prey - \alpha * prey * pred\)
\(\frac{\partial pred}{\partial t} = eff * \alpha * pred * prey - mort * pred\)
Ordinary Differential Equation with two dependent variables
Still use the ODE solver (`ode`) in R
Still code the derivative as a function
Use lists or vectors to bring in initial conditions for all dependent variables (similar to how we bring in multiple parameters to the derivative definition function)
using `with()` can help make it easier to code the use of parameters within the derivative definition function (see example below)
use lists to output derivatives for all dependent variables
## function (t, pop, pars)
## {
## with(as.list(c(pars, pop)), {
## dprey = rprey * prey - alpha * prey * pred
## dpred = eff * alpha * prey * pred - pmort * pred
## return(list(c(dprey, dpred)))
## })
## }
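For reference, a self-contained sketch that defines this derivative function and runs it through the solver. The parameter values here are illustrative assumptions - the notes do not show the exact values used to generate the output table below.

```r
library(deSolve)

# predator-prey derivative function, matching the printed function above
lotvmod = function(t, pop, pars) {
  with(as.list(c(pars, pop)), {
    dprey = rprey * prey - alpha * prey * pred
    dpred = eff * alpha * prey * pred - pmort * pred
    return(list(c(dprey, dpred)))
  })
}

# illustrative parameters and initial conditions (assumptions)
pars = c(rprey = 0.5, alpha = 0.3, eff = 0.2, pmort = 0.2)
currpop = c(prey = 10, pred = 1)
days = seq(from = 1, to = 100)

res = ode(func = lotvmod, y = currpop, times = days, parms = pars)
head(res)
```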
## time prey pred
## [1,] 1 10.000000 1.000000
## [2,] 2 11.320956 1.564997
## [3,] 3 10.244569 2.482924
## [4,] 4 6.914805 3.420960
## [5,] 5 3.777091 3.836373
## [6,] 6 1.987468 3.710744
# rearrange for easy plotting
resl = as.data.frame(res) %>% pivot_longer(-time, names_to="animal", values_to="pop")
p1=ggplot(resl, aes(time, pop, col=animal))+geom_line()
p1
# To make this easier to understand, maybe plot predator versus prey directly
p2b=ggplot(as.data.frame(res), aes(pred, prey, col=time))+geom_point()+labs(y="Prey",x="Predators")
p2b
# Try other parameters - try to bring the relative size of predators (versus prey) higher
\(\frac{\partial prey}{\partial t} = r_{prey} * prey - \alpha * prey * pred\)
\(\frac{\partial pred}{\partial t} = eff * \alpha * pred * prey - mort * pred\)
Predator and Prey with Carrying Capacity?
How would you code that?
\(\frac{\partial prey}{\partial t} = r_{prey} * (1-\frac{prey}{K})*prey - \alpha * prey * pred\)
\(\frac{\partial pred}{\partial t} = eff * \alpha * pred * prey - mort * pred\)
## function (t, pop, pars)
## {
## with(as.list(c(pars, pop)), {
## dprey = rprey * (1 - prey/K) * prey - alpha * prey *
## pred
## dpred = eff * alpha * prey * pred - pmort * pred
## return(list(c(dprey, dpred)))
## })
## }
# initial conditions
currpop=c(prey=1, pred=1)
# set parameter list
pars = c(rprey=0.1, alpha=0.6, eff=0.8,pmort=0.4, K=20)
# times when you want to evaluate
days = seq(from=1,to=500)
# run our differential equation solver
res = ode(func=lotvmodK, y=currpop, times=days, parms=pars)
# rearrange for plotting
resl = as.data.frame(res) %>% pivot_longer(-time, names_to="species", values_to="pop")
# graph both populations over time
p1=ggplot(resl, aes(time, pop, col=species))+geom_line()
p1
# also look at relationships between predator and prey populations and use color for time
# I will remove the legend here to make it easier to see
p2 = ggplot(as.data.frame(res), aes(pred, prey, col=(round(time/10))))+geom_point()+theme(legend.position = "none")
p2
p2 = ggplot(as.data.frame(res), aes(pred, prey, col=as.factor(round(time/10))))+geom_point()+theme(legend.position = "none")
p2
Species 1 (or Company 1)
\(\frac{\partial s_1}{\partial t} = r_{1} * s_1 * (1-(\frac{s_1+\alpha_{12} * s_2}{K_1}))\)
Species 2 (or Company 2)
\(\frac{\partial s_2}{\partial t} = r_{2} * s_2 * (1-(\frac{s_2+\alpha_{21} * s_1}{K_2}))\)
How might you explain what this is doing?
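The notes show only the equations for this competition model; a minimal sketch coded directly from them might look like the following (all parameter values here are illustrative assumptions):

```r
library(deSolve)

# two-species (or two-company) competition model from the equations above
competition = function(t, pop, pars) {
  with(as.list(c(pars, pop)), {
    ds1 = r1 * s1 * (1 - (s1 + a12 * s2) / K1)
    ds2 = r2 * s2 * (1 - (s2 + a21 * s1) / K2)
    return(list(c(ds1, ds2)))
  })
}

# illustrative parameters: a12/a21 are the competition coefficients
pars = c(r1 = 0.5, r2 = 0.4, a12 = 0.6, a21 = 0.8, K1 = 100, K2 = 120)
res = ode(func = competition, y = c(s1 = 1, s2 = 1),
          times = seq(from = 1, to = 200), parms = pars)
tail(res)
```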
Consider predator-prey - but what will the output be? If we want to quantify sensitivity, it is useful to summarize the output as a single value or set of values
For example, the maximum and minimum of each population
NOTE - I’ve also given you an example below using Latin Hypercube Sampling (optional review)
source("../R/lotvmodK.R")
# let's start with Sobol
library(sensitivity)
# we want to learn about sensitivity to growth rate (rprey) and carrying capacity (K)
# set the number of samples
np=200
K = rnorm(mean=150, sd=20, n=np)
rprey = runif(min=0.01, max=0.3, n=np)
alpha= runif(min=0.1, max=0.4, n=np)
eff= rnorm(mean=0.3, sd=0.01, n=np)
pmort= runif(min=0.01, max=0.45, n=np)
X1 = cbind.data.frame(rprey=rprey, K=K, alpha=alpha, eff=eff, pmort=pmort)
# repeat to get our second set of samples
np=200
K = rnorm(mean=150, sd=20, n=np)
rprey = runif(min=0.01, max=0.3, n=np)
alpha= runif(min=0.1, max=0.4, n=np)
eff= rnorm(mean=0.3, sd=0.01, n=np)
pmort= runif(min=0.01, max=0.45, n=np)
X2 = cbind.data.frame(rprey=rprey, K=K, alpha=alpha, eff=eff, pmort=pmort)
# create our Sobol object and get sets of parameters for running the model
sens_PP = sobolSalt(model = NULL,X1, X2, nboot = 300)
# name parameter sets...
colnames(sens_PP$X) = c("rprey","K","alpha","eff","pmort")
# our metrics
# let's say we want the maximum and minimum of both predator and prey
compute_metrics = function(result) {
maxprey = max(result$prey)
maxpred = max(result$pred)
minprey = min(result$prey)
minpred = min(result$pred)
return(list(maxprey=maxprey, minprey=minprey, maxpred=maxpred, minpred=minpred))}
# build a wrapper function
p_wrapper = function(rprey,alpha, eff, pmort, K, currpop, days, func) {
parms = list(rprey=rprey, alpha=alpha, eff=eff, pmort=pmort, K=K)
result = ode(y=currpop, times=days, func=func, parms=parms)
colnames(result)=c("time","prey","pred")
# get metrics
metrics=compute_metrics(as.data.frame(result))
return(metrics)
}
# run our model for all parameters and extract the results
currpop=c(prey=1, pred=1)
days = seq(from=1,to=500)
allresults = as.data.frame(sens_PP$X) %>% pmap(p_wrapper, currpop=currpop, days=days, func=lotvmodK)
# take results back to unlisted form
allres = allresults %>% map_dfr(`[`,c("maxprey","minprey","maxpred","minpred"))
# range of response across parameter uncertainty
allresl = allres %>% gather(key="metric",value="pop")
ggplot(allresl, aes(metric, pop))+geom_boxplot()
# dealing with different scales
ggplot(allresl, aes(metric, pop, col=metric))+geom_boxplot()+facet_wrap(~metric, scales="free")
# plot cumulative densities
ggplot(allresl, aes(pop, col=metric))+stat_ecdf(geom="line")+facet_wrap(~metric, scales="free")
# create Sobol indices for max prey
sens_PP_maxprey = sens_PP %>% sensitivity::tell(y=allres$maxprey)
rownames(sens_PP_maxprey$S)=c("rprey","K","alpha","eff","pmort")
sens_PP_maxprey$S
## original bias std. error min. c.i. max. c.i.
## rprey -0.001040218 -0.005376017 0.06962958 -0.1272410 0.1566899
## K -0.026442878 -0.006382507 0.06589776 -0.1443402 0.1159856
## alpha 0.367479562 -0.010644855 0.08110274 0.2265306 0.5361740
## eff -0.018977584 -0.006256640 0.06574709 -0.1359716 0.1229642
## pmort 0.424162832 -0.004959283 0.05939275 0.3224176 0.5506333
## original bias std. error min. c.i. max. c.i.
## rprey 0.0144344248 4.963348e-04 0.0031179428 0.0065729415 0.019003635
## K 0.0006946637 -2.916024e-05 0.0002751955 0.0001713897 0.001220865
## alpha 0.4738299381 5.048039e-03 0.0543045341 0.3559219145 0.574759634
## eff 0.0041277848 9.552535e-05 0.0006394799 0.0026076921 0.005210216
## pmort 0.6895652003 1.041685e-02 0.0801469067 0.5099210137 0.820459876
Here’s an example using Latin Hypercube Sampling
With Partial Rank Correlation Coefficient
# create parameter sets...
factors = c("rprey","K","alpha","eff","pmort")
nsets=500
# create a latin hypercube (randomLHS is from the lhs package)
library(lhs)
sens_pp = randomLHS(nsets, length(factors))
colnames(sens_pp)=factors
# refine using sampling distributions for each parameter
sens_pp[,"rprey"]= qunif(sens_pp[,"rprey"], min=0.01, max=0.3)
sens_pp[,"K"]= qunif(sens_pp[,"K"], min=10, max=200)
sens_pp[,"alpha"]= qunif(sens_pp[,"alpha"], min=0.1, max=0.4)
sens_pp[,"eff"]= qnorm(sens_pp[,"eff"], mean=0.3, sd=0.01)
sens_pp[,"pmort"]= qunif(sens_pp[,"pmort"], min=0.05, max=0.45)
# let's create a metric and wrapper function as we did for our population models
# first our metrics
# let's say we want the maximum and minimum of both predator and prey
compute_metrics = function(result) {
maxprey = max(result$prey)
maxpred = max(result$pred)
minprey = min(result$prey)
minpred = min(result$pred)
return(list(maxprey=maxprey, minprey=minprey, maxpred=maxpred, minpred=minpred))}
# build a wrapper function
p_wrapper = function(rprey,alpha, eff, pmort, K, currpop, days, func) {
parms = list(rprey=rprey, alpha=alpha, eff=eff, pmort=pmort, K=K)
result = ode(y=currpop, times=days, func=func, parms=parms)
colnames(result)=c("time","prey","pred")
# get metrics
metrics=compute_metrics(as.data.frame(result))
return(metrics)
}
# run our model for all parameters and extract the results
currpop=c(prey=1, pred=1)
days = seq(from=1,to=500)
allresults = as.data.frame(sens_pp) %>% pmap(p_wrapper, currpop=currpop, days=days, func=lotvmodK)
# take results back to unlisted form
allres = allresults %>% map_dfr(`[`,c("maxprey","minprey","maxpred","minpred"))
# range of response across parameter uncertainty
allresl = allres %>% gather(key="metric",value="pop")
ggplot(allresl, aes(metric, pop))+geom_boxplot()
# dealing with different scales
ggplot(allresl, aes(metric, pop, col=metric))+geom_boxplot()+facet_wrap(~metric, scales="free")
# plot cumulative densities
ggplot(allresl, aes(pop, col=metric))+stat_ecdf(geom="line")+facet_wrap(~metric, scales="free")
# compute PRCCs using epi.prcc
# let's do maxpred first
epi.prcc(cbind.data.frame(sens_pp, allres$maxpred))
## var est lower upper test.statistic df p.value
## 1 rprey 0.544175367 0.47031059 0.6180401 14.47459244 498 6.941227e-40
## 2 K 0.473539962 0.39599495 0.5510850 11.99796640 498 2.611565e-29
## 3 alpha -0.812104720 -0.86347829 -0.7607311 -31.05826029 498 1.366908e-118
## 4 eff 0.002986732 -0.08505493 0.0910284 0.06665195 498 9.468855e-01
## 5 pmort 0.789202777 0.73513327 0.8432723 28.67748433 498 1.621744e-107
## var est lower upper test.statistic df p.value
## 1 rprey 0.78777087 0.7335397 0.84200199 28.540144 498 7.182480e-107
## 2 K -0.06310576 -0.1509723 0.02476081 -1.411075 498 1.588468e-01
## 3 alpha -0.76804270 -0.8244247 -0.71166069 -26.763915 498 1.902907e-98
## 4 eff -0.05309613 -0.1410140 0.03482174 -1.186562 498 2.359660e-01
## 5 pmort 0.68307722 0.6187760 0.74737840 20.871599 498 5.688085e-70
Lorenz equations (from fluid dynamics): P, R, and B are parameters (fixed values); x, y, z are variables that change with time and describe how convection in the atmosphere works - a cell that is warmed from below and cooled from above
Developed by meteorologist Edward Lorenz during early climate model development in the 1960s
The Lorenz equations are an example of a dynamic system that can exhibit stable and chaotic states depending on parameters and initial conditions
Let's look at a Lorenz system with its interesting dynamics
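The sourced file `../R/lorenz.R` is not shown in the notes; a sketch consistent with the parameter names (a, b, c) used in the call below might be:

```r
# a plausible sketch of what ../R/lorenz.R contains (assumption - the file
# itself is not shown); a, b, c correspond to the P, R, B parameters
lorenz = function(t, state, pars) {
  with(as.list(c(pars, state)), {
    dx = a * (y - x)       # "P": Prandtl number term
    dy = x * (b - z) - y   # "R": Rayleigh number term
    dz = x * y - c * z     # "B": geometric factor term
    return(list(c(dx, dy, dz)))
  })
}
```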
# lorenz
source("../R/lorenz.R")
pars = list(a=10,b=28,c=8/3)
res = ode(func=lorenz, c(x=0.1,y=0,z=0), times=seq(0,50,by=0.01), parms=pars)
ggplot(as.data.frame(res), aes(x,y, col=time))+geom_point()
resl = as.data.frame(res) %>% gather(key="var", value="value",-time)
ggplot(resl, aes(time, value, col=var))+geom_line()
# try with different parameters and initial conditions
pars = list(a=15,b=28,c=8/4)
res = ode(func=lorenz, c(x=0.3,y=5,z=10), times=seq(0,50,by=0.01), parms=pars)
ggplot(as.data.frame(res), aes(x,y, col=time))+geom_point()+scale_colour_gradientn(colours = terrain.colors(10))
ggplot(as.data.frame(res), aes(x,z, col=time))+geom_point()+scale_colour_gradientn(colours = terrain.colors(10))
ggplot(as.data.frame(res), aes(y,z, col=time))+geom_point()+scale_colour_gradientn(colours = terrain.colors(10))
resl = as.data.frame(res) %>% gather(key="var", value="value",-time)
ggplot(resl, aes(time, value, col=var))+geom_line()
How do we think about stability for multiple outputs (e.g. predator/prey)?
Populations don’t change when derivatives are zero!
What conditions lead to BOTH derivatives being zero?
Make dprey and dpred equal to 0 and rearrange
Try setting your initial conditions close to these values and see what happens
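Doing that rearrangement (using the parameter names from the code, where mort is pmort) gives the equilibrium populations:

\(\frac{\partial pred}{\partial t} = 0 \Rightarrow prey^* = \frac{pmort}{eff * \alpha}\)

\(\frac{\partial prey}{\partial t} = 0 \Rightarrow pred^* = \frac{r_{prey}}{\alpha} * (1-\frac{prey^*}{K})\)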
source("../R/lotvmodK.R")
# set parameter list
pars = data.frame(rprey=0.1, alpha=0.6, eff=0.8,pmort=0.4, K=200)
# now let's try initial conditions that will be stable
preyi = with(pars, pmort/(eff*alpha))
predi = with(pars, rprey/alpha*(1-preyi/K))
preyi
## [1] 0.8333333
predi
## [1] 0.1659722
# times when you want to evaluate
days = seq(from=1,to=500)
# let's first see what happens when we start with 1 of each
currpop=c(prey=1, pred=1)
# run our differential equation solver
res = ode(func=lotvmodK, y=currpop, times=days, parms=pars)
# extract the results
res_smallstart = as.data.frame(res) %>% gather(key="animal", value="pop",-time)
# graph both populations over time
p1=ggplot(res_smallstart, aes(time, pop, col=animal))+geom_line()
p1
# now let's see what happens when we start at our estimates of the stable populations
stablepop = c(prey=preyi, pred=predi)
res = ode(func=lotvmodK, y=stablepop, times=days, parms=pars)
# extract the results
res_stablestart = as.data.frame(res) %>% gather(key="animal", value="pop",-time)
# graph both populations over time
p2=ggplot(res_stablestart, aes(time, pop, col=animal))+geom_line()
p2
Simple mathematical definition - when derivatives are zero
More complex definitions - think of metrics that look at ranges, cycling, or return to a population above some threshold after disturbance
for more complex definitions of stability you’d need to do the integration (e.g. run the ODE solver) and then plot or compute a metric of stability
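One example of such a metric, computed from the solver output (this particular choice is an illustration, not the required definition):

```r
# one possible stability metric for "more complex" definitions: the
# coefficient of variation of each population over the final part of the
# simulation (near zero once the populations have settled down)
# `result` is assumed to be ode() output with columns prey and pred
stability_metric = function(result, window = 100) {
  tail_res = tail(as.data.frame(result), window)
  c(cv_prey = sd(tail_res$prey) / mean(tail_res$prey),
    cv_pred = sd(tail_res$pred) / mean(tail_res$pred))
}
```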
Consider how you might add hunting of prey to the predator-prey model that we’ve been using in class
Build this model (e.g. add harvesting to lotvmodK.R); you should make sure that you don’t hunt more prey than exist. To ensure that, you might also add a minimum prey population input that must be met before hunting is allowed.
Explore how different hunting levels and different minimum prey populations (before hunting is allowed) are likely to affect the stability of the populations of both predator and prey. Use this exploration to recommend a hunting target that will be sustainable (e.g. leave you with stable prey and predator populations)
You can assume the following: rprey=0.95, alpha=0.01, eff=0.6, pmort=0.4, K=2000
A key challenge is how you might want to define stability. It’s up to you, but you will need to write a sentence to explain why you chose the measure that you did. It could be something as simple as maintaining a population above some value 50 years into the future.
It’s also up to you how you “explore” hunting - you can simply try different values or do it more formally by running your model across a range of values
Submit the Rmarkdown that documents your exploration (e.g. how you tested different hunting levels and how you defined a stability metric) and provides your estimated sustainable hunting level.